Chairwoman Johnson Opening Statement for Hearing on Managing the Risks of Artificial Intelligence
(Washington, DC) – Today, the House Committee on Science, Space, and Technology’s Subcommittee on Research and Technology is holding a hearing titled, “Trustworthy AI: Managing the Risks of Artificial Intelligence.”
Chairwoman Eddie Bernice Johnson’s (D-TX) opening statement as prepared for the record is below.
Thank you, Chairwoman Stevens and Ranking Member Feenstra, for holding today’s hearing. And welcome to our esteemed panel of witnesses.
We are here today to learn more about the development of trustworthy artificial intelligence and the work being done to reduce the risks posed by AI systems.
Recent advances in computing and software engineering, combined with an increase in the availability of data, have enabled rapid developments in the capabilities of AI systems. These systems are now deployed across every sector of our society and economy, including education, law enforcement, medicine, and transportation. These are sectors in which AI carries the potential for both great benefit and great harm.
One significant risk across sectors is harmful bias, which can occur when an AI system produces results that are systemically prejudiced. Bias in AI can amplify, perpetuate, and exacerbate existing structural inequalities in our society, or create new ones. The bias may arise from non-representative training data, implicit biases in the humans who design the system, and many other factors. It is often the result of the complex interactions among the human, organizational, and technical factors involved in the development of AI systems. Consequently, the solution to these problems is not a purely technical one. We must ensure that AI systems are designed, tested, and deployed through an inclusive, thoughtful, and accountable process that results in AI that is safe, trustworthy, and free of harmful bias.
That goal remained central in our development of the National Artificial Intelligence Initiative Act, which I led alongside Ranking Member Lucas and which we enacted last Congress. In the National AI Initiative Act, we directed the National Science Foundation to support research and education in trustworthy AI. As we train the next generation of AI researchers, we must not treat ethics as something separate from technology development. The law specifically directs NSF to integrate ethics research with technology education from the earliest stages and establishes faculty fellowships in technology ethics. The recently enacted CHIPS and Science Act further directs NSF to require ethics statements in its award proposals to ensure that researchers consider the potential societal implications of their work.
As we will learn more about today, the National AI Initiative Act also directed the National Institute of Standards and Technology to develop a framework for trustworthy AI, in addition to carrying out measurement research and standards development to enable the implementation of such a framework.
As AI systems continue to make rapid progress, the activities carried out under the National AI Initiative Act will be key to grappling with the sociotechnical questions these systems pose.
I look forward to hearing more from our witnesses today and to discussing what more the United States can do to ensure we are the world leader in the development of trustworthy AI. Thank you, and I yield back my time.